
    Fusion of facial regions using color information in a forensic scenario

    Communication presented at: 18th Iberoamerican Congress on Pattern Recognition, CIARP 2013; Havana, Cuba; 20-23 November 2013. The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-41827-3_50.
    This paper reports an analysis of the benefits of using color information in a region-based face recognition system. Three color spaces are analysed (RGB, YCbCr, lαβ) in a very challenging scenario: matching good-quality mugshot images against video surveillance images. This scenario is of special interest for forensics, where examiners compare two face images using the global information of the faces, but paying special attention to each individual facial region (eyes, nose, mouth, etc.). This work analyses the discriminative power of 15 facial regions, comparing grayscale and color information. Results show a significant improvement in performance when fusing several regions of the face compared to using the whole face image alone. A further improvement is achieved when color information is considered.
    This work has been partially supported by a contract with Spanish Guardia Civil and projects BBfor2 (FP7-ITN-238803), Bio-Challenge (TEC2009-11186), Bio-Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
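    As a minimal sketch of the kind of conversion involved (assuming OpenCV and NumPy; the lαβ matrices are the standard Ruderman/Reinhard ones, not code from the paper):

        import cv2
        import numpy as np

        def to_ycbcr(bgr):
            # OpenCV loads images as BGR; its YCrCb conversion yields the YCbCr channels.
            return cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)

        def to_lalphabeta(bgr):
            # Ruderman-style lαβ: RGB -> LMS cone space -> log -> decorrelating rotation.
            rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float64) / 255.0
            rgb2lms = np.array([[0.3811, 0.5783, 0.0402],
                                [0.1967, 0.7244, 0.0782],
                                [0.0241, 0.1288, 0.8444]])
            lms = rgb.reshape(-1, 3) @ rgb2lms.T
            log_lms = np.log10(np.clip(lms, 1e-6, None))   # avoid log(0)
            rot = np.array([[1 / np.sqrt(3),  1 / np.sqrt(3),  1 / np.sqrt(3)],
                            [1 / np.sqrt(6),  1 / np.sqrt(6), -2 / np.sqrt(6)],
                            [1 / np.sqrt(2), -1 / np.sqrt(2),  0.0]])
            return (log_lms @ rot.T).reshape(bgr.shape)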

    Identification using face regions: Application and assessment in forensic scenarios

    This is the author's version of a work that was accepted for publication in Forensic Science International. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Forensic Science International, 23, 1-3 (2013). DOI: 10.1016/j.forsciint.2013.08.020.
    This paper reports an exhaustive analysis of the discriminative power of the different regions of the human face in various forensic scenarios. In practice, when forensic examiners compare two face images, they focus not only on the overall similarity of the two faces: they also carry out an exhaustive morphological comparison region by region (e.g., nose, mouth, eyebrows, etc.). In this scenario it is very important to know, based on scientific methods, to what extent each facial region can help in identifying a person. This knowledge, obtained using quantitative and statistical methods on given populations, can then be used by the examiner to support or tune his observations. To generate such scientific knowledge useful to the expert, several methodologies are compared, including manual and automatic facial landmark extraction, different facial region extractors, and various distances between the subject and the acquisition camera. Three scenarios of interest for forensics are also considered, comparing mugshot and Closed-Circuit TeleVision (CCTV) face images using the MORPH and SCface databases. One of the findings is that the discriminative power of the facial regions changes with the acquisition distance, in some cases outperforming the full face.
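    As an illustration only (the region names, sizes, and helper below are hypothetical, not the paper's implementation), per-region analysis can be sketched as cropping a box around each landmark, scaled by the inter-eye distance:

        import numpy as np

        # Illustrative region definitions: each region is a box centred on a
        # landmark, with a size proportional to the inter-eye distance (IOD).
        REGIONS = {"left_eye": 0.6, "right_eye": 0.6, "nose": 0.8, "mouth": 0.9}

        def crop_region(img, center, half_size):
            x, y = int(center[0]), int(center[1])
            h = int(half_size)
            return img[max(y - h, 0):y + h, max(x - h, 0):x + h]

        def extract_regions(img, landmarks):
            # landmarks: dict mapping region name -> (x, y) pixel coordinates.
            iod = np.linalg.norm(np.subtract(landmarks["left_eye"],
                                             landmarks["right_eye"]))
            return {name: crop_region(img, landmarks[name], s * iod / 2)
                    for name, s in REGIONS.items()}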

    Face recognition at a distance: Scenario analysis and applications

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-14883-5_44. Proceedings of the International Symposium of Distributed Computing and Artificial Intelligence, held in Valencia (Spain).
    Face recognition is the most popular biometric used in applications at a distance, which range from high-security scenarios such as border control to others such as video games. This is a very challenging task, since there are many varying factors (illumination, pose, expression, etc.). This paper reports an experimental analysis of three acquisition scenarios for face recognition at a distance, namely close, medium, and far distance between camera and query face, all three considering templates enrolled in controlled conditions. These three representative scenarios are studied using data from the NIST Multiple Biometric Grand Challenge, as a first step towards understanding the main variability factors that affect face recognition at a distance based on realistic yet workable and widely available data. The scenario analysis is conducted quantitatively in two ways: first, an analysis of the information content in segmented faces in the different scenarios; second, an analysis of the performance across scenarios of three matchers, one commercial and two standard approaches using popular features (PCA and DCT) and matchers (SVM and GMM). The results show to what extent the acquisition setup impacts the verification performance of face recognition at a distance.
    This work has been supported by project Contexts (S2009/TIC-1485).
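    A minimal sketch of one of the standard approaches mentioned, PCA features with an SVM matcher, using scikit-learn; the pipeline settings below are typical assumptions, not the authors' exact configuration:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: (n_samples, n_pixels) flattened, aligned face crops; y: subject labels.
        def train_pca_svm(X, y, n_components=100):
            model = make_pipeline(StandardScaler(),
                                  PCA(n_components=n_components, whiten=True),
                                  SVC(kernel="linear", probability=True))
            model.fit(X, y)
            return model

        # Verification score for a claimed identity: probability of the claimed class.
        def verify(model, x, claimed_id):
            probs = model.predict_proba(x.reshape(1, -1))[0]
            return probs[list(model.classes_).index(claimed_id)]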

    Comparative analysis of the variability of facial landmarks for forensics using CCTV images

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-53842-1_35. Proceedings of the 6th Pacific-Rim Symposium, PSIVT 2013, Guanajuato, Mexico, October 28-November 1, 2013.
    This paper reports a study of the variability of facial landmarks in a forensic scenario using images acquired from CCTV. These images present very low quality and a large range of variability factors, such as differences in pose, expressions, occlusions, etc. In addition, the variability of facial landmarks is affected by the precision with which the landmarks are tagged. This process can be done manually or automatically depending on the application (e.g., forensics or automatic face recognition, respectively). This study compares both manual and automatic procedures, as well as three distances between the camera and the subjects. Results show that landmarks located in the outer part of the face (top of the head, ears and chin) present a higher level of variability than landmarks located in the inner face (eye region and nose). The study also shows that landmark variability increases with the distance between subject and camera, and that the results of the manual and automatic approaches are similar for the inner facial landmarks.
    This work has been partially supported by a contract with Spanish Guardia Civil and projects BBfor2 (FP7-ITN-238803), Bio-Shield (TEC2012-34881), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
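    A minimal sketch of how such per-landmark variability could be quantified (the array layout is a hypothetical convention, not the paper's code):

        import numpy as np

        def landmark_variability(coords):
            # coords: (n_images, n_landmarks, 2) array of (x, y) positions,
            # e.g. normalised by inter-eye distance before being passed in.
            mean = coords.mean(axis=0)                    # (n_landmarks, 2)
            disp = np.linalg.norm(coords - mean, axis=2)  # distance to mean position
            return disp.mean(axis=0)                      # mean dispersion per landmark

        # Compare manual vs automatic tagging at a given camera distance:
        # var_manual = landmark_variability(manual_coords)
        # var_auto = landmark_variability(auto_coords)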

    Pose variability compensation using projective transformation for forensic face recognition

    Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. E. Gonzalez-Sosa, R. Vera-Rodriguez, J. Fierrez, P. Tome and J. Ortega-Garcia, "Pose Variability Compensation Using Projective Transformation for Forensic Face Recognition," 2015 International Conference of the Biometrics Special Interest Group (BIOSIG), Darmstadt, 2015, pp. 1-5. DOI: 10.1109/BIOSIG.2015.7314615.
    The forensic scenario is a very challenging problem within the face recognition community. The verification problem in this case typically implies the comparison of a high-quality controlled image against a low-quality image extracted from closed-circuit television (CCTV). One of the difficulties frequently present in this scenario is pose deviation, since CCTV devices are usually mounted on ceilings while the subject normally walks facing forward. This paper demonstrates the value of the projective transformation as a simple tool to compensate for the pose distortion present in surveillance images in forensic scenarios. We evaluate the influence of this projective transformation on a baseline system based on principal component analysis and support vector machines (PCA-SVM) using the SCface database. The application of this technique greatly improves performance, and the improvement is more pronounced for closer images. Results suggest that this transformation is a worthwhile preprocessing step for all CCTV images. The average relative improvement reached with this method is around 30% in EER.
    This work has been supported in part by Bio-Shield (TEC2012-34881) from Spanish MINECO, in part by BEAT (FP7-SEC-284989) from the EU, and in part by Cátedra UAM-Telefónica. E. Gonzalez-Sosa is supported by a PhD scholarship from Universidad Autonoma de Madrid.
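    A projective (perspective) transformation maps four reference points in the surveillance frame onto target positions in a frontal template. A minimal OpenCV sketch, where the choice of reference points and output size are assumptions rather than the paper's exact setup:

        import cv2
        import numpy as np

        def compensate_pose(img, src_pts, dst_pts, size=(128, 160)):
            # src_pts: four 2D points in the CCTV image (e.g. eye and mouth
            # corners); dst_pts: their target positions in a frontal template.
            H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
            return cv2.warpPerspective(img, H, size)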

    Acquisition scenario analysis for face recognition at a distance

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-17289-2_44. Proceedings of the 6th International Symposium, ISVC 2010, Las Vegas, NV (USA).
    An experimental analysis of three acquisition scenarios for face recognition at a distance is reported, namely close, medium, and far distance between camera and query face, all three considering templates enrolled in controlled conditions. These three representative scenarios are studied using data from the NIST Multiple Biometric Grand Challenge, as a first step towards understanding the main variability factors that affect face recognition at a distance based on realistic yet workable and widely available data. The scenario analysis is conducted quantitatively in two ways. First, we analyze the information content in segmented faces in the different scenarios. Second, we analyze the performance across scenarios of three matchers, one commercial and two standard approaches using popular features (PCA and DCT) and matchers (SVM and GMM). The results show to what extent the acquisition setup impacts the verification performance of face recognition at a distance.
    This work has been partially supported by projects Bio-Challenge (TEC2009-11186), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".
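    Complementing the PCA-SVM sketch above, a minimal sketch of the other standard approach mentioned, block-DCT features with a per-subject GMM matcher, using SciPy and scikit-learn; block size, coefficient count and mixture size are assumptions:

        import numpy as np
        from scipy.fft import dct
        from sklearn.mixture import GaussianMixture

        def block_dct_features(img, block=8, n_coeffs=15):
            # Non-overlapping 8x8 blocks over a grayscale face; keep the
            # low-frequency 2-D DCT coefficients of each block.
            feats = []
            h, w = img.shape
            for y in range(0, h - block + 1, block):
                for x in range(0, w - block + 1, block):
                    b = img[y:y + block, x:x + block].astype(np.float64)
                    c = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
                    feats.append(c[:4, :4].ravel()[:n_coeffs])
            return np.array(feats)

        # Enrolment: one GMM per subject over the block features of their templates.
        def enroll(templates, n_components=8):
            X = np.vstack([block_dct_features(t) for t in templates])
            return GaussianMixture(n_components=n_components).fit(X)

        # Verification score: average log-likelihood of the query's blocks.
        def score(gmm, query_img):
            return gmm.score(block_dct_features(query_img))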

    Fusion of footsteps and face biometrics on an unsupervised and uncontrolled environment

    Ruben Vera-Rodriguez, Pedro Tome, Julian Fierrez and Javier Ortega-Garcia, "Fusion of footsteps and face biometrics on an unsupervised and uncontrolled environment", Sensing Technologies for Global Health, Military Medicine, Disaster Response, and Environmental Monitoring II; and Biometric Technology for Human Identification IX, Proc. SPIE 8371 (Baltimore, Maryland, USA; May 1, 2012). DOI: 10.1117/12.918550. Copyright 2012 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.
    This paper reports the first experiments on the fusion of footsteps and face in an unsupervised and uncontrolled environment for person authentication. Footstep recognition is a relatively new biometric based on signals extracted from people walking over floor sensors. The idea of fusing footsteps and face starts from the premise that in an area where footstep sensors are installed it is very simple also to place a camera that captures the face of the person walking over the sensors. This setup may find application in scenarios like ambient assisted living, smart homes, eldercare, or security access. The paper reports a comparative assessment of both biometrics using the same database and experimental protocols. In the experimental work we consider two different applications: smart homes (a small group of users with a large set of training data) and security access (a larger group of users with a small set of training data), obtaining 0.9% and 5.8% EER respectively for the fusion of both modalities. This is a significant performance improvement compared with the results obtained by the individual systems.
    This work has been partially supported by projects Contexts (S2009/TIC-1485), Bio-Challenge (TEC2009-11186) and "Cátedra UAM-Telefónica".
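    A minimal sketch of score-level fusion under common assumptions (min-max normalisation followed by a weighted sum; the weight and bounds would be tuned on development data, and this is not necessarily the fusion rule used in the paper):

        import numpy as np

        def minmax_norm(scores, lo, hi):
            # Normalise matcher scores to [0, 1] using bounds estimated on training data.
            return (np.asarray(scores) - lo) / (hi - lo)

        def fuse(face_scores, footstep_scores, w_face=0.5):
            # Weighted-sum fusion of the two normalised modality scores.
            return (w_face * np.asarray(face_scores)
                    + (1 - w_face) * np.asarray(footstep_scores))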

    Analysis of gait recognition on constrained scenarios with limited data information

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-38061-7_23. Proceedings of the International Workshops of Practical Applications of Agents and Multi-Agent Systems (PAAMS), held in 2013, Salamanca (Spain).
    This paper focuses on the assessment of gait recognition in a constrained scenario, where limited information can be extracted from the gait image sequences. In particular, we are interested in assessing the performance of gait recognition when only the lower part of the body is acquired by the camera and just half of a gait cycle is available (SFootBD database). Various state-of-the-art feature approaches have been applied to the data. Results show that good recognition performance can be achieved using such limited data for gait biometrics. A comparative analysis of the influence of the quantity of data used to train the models has been carried out, obtaining 8.6% EER when 10 samples are used to train the models and 5.7% EER when 40 samples are used. A comparison with a standard, idealized gait database (the USF database) is also carried out using similar experimental protocols; in this case 10 samples are used for training, achieving 3.6% EER. This comparison shows that different feature approaches perform differently on each database, with the best individual results achieved by the MPCA and EGEI methods for the SFootBD and USF databases respectively.
    This work has been supported by projects Contexts (S2009/TIC-1485), Bio-Challenge (TEC2009-11186), Bio-Shield (TEC2012-34881) and "Cátedra UAM-Telefónica". Rubén Vera-Rodríguez is supported by a Juan de la Cierva Fellowship from the Spanish MINECO.
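    EER (equal error rate), the metric quoted throughout these entries, is the operating point where the false acceptance and false rejection rates coincide. A minimal sketch of how it can be estimated from genuine and impostor score lists:

        import numpy as np

        def compute_eer(genuine, impostor):
            # Sweep thresholds over all observed scores; the EER is the point
            # where the false rejection rate (FRR) crosses the false acceptance
            # rate (FAR).
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            genuine, impostor = np.asarray(genuine), np.asarray(impostor)
            best = (2.0, None)
            for t in thresholds:
                frr = np.mean(genuine < t)     # genuine attempts rejected
                far = np.mean(impostor >= t)   # impostor attempts accepted
                if abs(frr - far) < best[0]:
                    best = (abs(frr - far), (frr + far) / 2)
            return best[1]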

    BioGiga: A database of synthetic images of people at 94 GHz for biometric purposes

    Electronic version of the paper presented at the XXVI Simposium Nacional de la Unión Científica Internacional de Radio, URSI 2011, held in Madrid.
    The baseline corpus of a new database, called BioGiga, acquired in the framework of the TeraSense Consolider Project, is presented. The corpus consists of synthetic images at 94 GHz of the bodies of 50 individuals. The images are the result of simulations carried out on body models in two types of scenarios (outdoors, indoors) and with two kinds of imaging systems (passive and active). These body models were previously generated from body measurements taken of the subjects. In this contribution, the methodology followed and the tools used to generate the database are outlined. Furthermore, the contents of the corpus (data and statistics) as well as its applications are described.
    This work has been partially funded by projects Bio-Challenge (TEC2009-11186), Contexts (S2009/TIC-1485), TeraSense (CSD2008-00068) and "Cátedra UAM-Telefónica".

    3-D modelling of a fossil tufa outcrop. The example of La Peña del Manto (Soria, Spain)

    Classical studies of tufas lack quantitative outcrop descriptions and facies models, and normally do not integrate subsurface data into the stratigraphic and evolutionary analysis. This paper describes the methodology followed to construct one of the first digital outcrop models of fossil tufas. The model incorporates 3-D lines and surfaces obtained from a terrestrial laser scanner, electrical resistivity tomography (ERT) profiles, and stratigraphic and sedimentologic data from 18 measured sections. The study identified seven sedimentary units (SU-1 to SU-7), composed of tufa carbonates (SU-1, 3, 5, 6) and clastics (SU-2, 4, 7). The facies identified occur in different proportions: phytoherm limestones of bryophytes represent 43% of tufa volume, bioclastic limestones 20%, phytoherm limestones of stems 12%, oncolitic limestones 8%, and clastics 15%. Three main architectural elements have been identified: 1) steeply dipping strata dominated by phytoherm limestones of bryophytes; 2) gently dipping strata dominated by phytoherm limestones of stems; and 3) horizontal strata dominated by bioclastic and oncoid limestones. The alternation of tufa growth and clastic input stages is interpreted as the result of climatic changes during the Mid-Late Pleistocene.
    This work was supported by project 18.KA4A-463 A.C.01 of the Universidad de Salamanca and project CGL2014-54818-P of the Ministerio de Economía y Competitividad (MINECO).